Collaborating Authors

Daniel Kahneman


A Framework for Studying AI Agent Behavior: Evidence from Consumer Choice Experiments

Cherep, Manuel, Ma, Chengtian, Xu, Abigail, Shaked, Maya, Maes, Pattie, Singh, Nikhil

arXiv.org Artificial Intelligence

Environments built for people are increasingly operated by a new class of economic actors: LLM-powered software agents making decisions on our behalf. These decisions range from our purchases to travel plans to medical treatment selection. Current evaluations of these agents largely focus on task competence, but we argue for a deeper assessment: how these agents choose when faced with realistic decisions. We introduce ABxLab, a framework for systematically probing agentic choice through controlled manipulations of option attributes and persuasive cues. We apply this to a realistic web-based shopping environment, where we vary prices, ratings, and psychological nudges, all of which are factors long known to shape human choice. We find that agent decisions shift predictably and substantially in response, revealing that agents are strongly biased choosers even without being subject to the cognitive constraints that shape human biases. This susceptibility reveals both risk and opportunity: risk, because agentic consumers may inherit and amplify human biases; opportunity, because consumer choice provides a powerful testbed for a behavioral science of AI agents, just as it has for the study of human behavior. We release our framework as an open benchmark for rigorous, scalable evaluation of agent decision-making.


A Comprehensive Evaluation of Cognitive Biases in LLMs

Malberg, Simon, Poletukhin, Roman, Schuster, Carolin M., Groh, Georg

arXiv.org Artificial Intelligence

We present a large-scale evaluation of 30 cognitive biases in 20 state-of-the-art large language models (LLMs) under various decision-making scenarios. Our contributions include a novel general-purpose test framework for reliable and large-scale generation of tests for LLMs, a benchmark dataset with 30,000 tests for detecting cognitive biases in LLMs, and a comprehensive assessment of the biases found in the 20 evaluated LLMs. Our work confirms and broadens previous findings suggesting the presence of cognitive biases in LLMs by reporting evidence of all 30 tested biases in at least some of the 20 LLMs. [Figure 1: An LLM changes its answer as the framing of the decision changes, indicating the susceptibility of the LLM to the Framing Effect.]


Rolling in the deep of cognitive and AI biases

Vakali, Athena, Tantalaki, Nicoleta

arXiv.org Artificial Intelligence

Nowadays, we delegate many of our decisions to Artificial Intelligence (AI), which acts either alone or as a human companion in decisions made to support several sensitive domains, like healthcare, financial services, and law enforcement. AI systems, even when carefully designed to be fair, are heavily criticized for delivering misjudged and discriminatory outcomes against individuals and groups. Numerous works on AI algorithmic fairness are devoted to Machine Learning pipelines that address biases and quantify fairness under a purely computational view. However, the continuing unfair and unjust AI outcomes indicate that there is an urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed, and deployed. Although the synergy of humans and machines seems imperative to make AI work, the significant impact of human and societal factors on AI bias is currently overlooked. We address this critical issue by following a radically new methodology under which human cognitive biases become core entities in our AI fairness overview. Inspired by the cognitive science definition and taxonomy of human heuristics, we identify how harmful human actions influence the overall AI lifecycle, and reveal hidden pathways from human to AI biases. We introduce a new mapping, which justifies the reflections of human heuristics in AI biases, and we detect relevant fairness intensities and inter-dependencies. We envision that this approach will contribute to revisiting AI fairness under deeper human-centric case studies, revealing the hidden causes and effects of biases.


If Ray Kurzweil Is Right (Again), You'll Meet His Immortal Soul in the Cloud

WIRED

The 76-year-old scientist and engineer has spent much of his time on earth arguing that humans can not only take advantage of yet-to-be-invented medical advances to live longer, but also ultimately merge with machines, become hyperintelligent, and stick around indefinitely. Just minutes before we met, we both learned that Daniel Kahneman, the Nobel Prize–winning psychologist and one of Kurzweil's intellectual jousting partners, had died. A few days before that, the science fiction author Vernor Vinge had also passed. Vinge's novels first described the singularity, that moment when superintelligent AI surpasses what humans can do and mere mortals need high-tech augmentation themselves to remain relevant. Kurzweil embraced the name for his own grand vision, and in 2005 wrote a best-selling book called The Singularity Is Near.


Causal Perception

Alvarez, Jose M., Ruggieri, Salvatore

arXiv.org Artificial Intelligence

Perception occurs when two individuals interpret the same information differently. Despite being a known phenomenon with implications for bias in decision-making, as individuals' experience determines interpretation, perception remains largely overlooked in automated decision-making (ADM) systems. In particular, it can have considerable effects on the fairness or fair usage of an ADM system, as fairness itself is context-specific and its interpretation dependent on who is judging. In this work, we formalize perception under causal reasoning to capture the act of interpretation by an individual. We also formalize individual experience as additional causal knowledge that comes with and is used by an individual. Further, we define and discuss loaded attributes, which are attributes prone to evoke perception. Sensitive attributes, such as gender and race, are clear examples of loaded attributes. We define two kinds of causal perception, unfaithful and inconsistent, based on the causal properties of faithfulness and consistency. We illustrate our framework through a series of decision-making examples and discuss relevant fairness applications. The goal of this work is to position perception as a parameter of interest, useful for extending the standard, single interpretation ADM problem formulation.


BIASeD: Bringing Irrationality into Automated System Design

Gulati, Aditya, Lozano, Miguel Angel, Lepri, Bruno, Oliver, Nuria

arXiv.org Artificial Intelligence

Human perception, memory and decision-making are impacted by tens of cognitive biases and heuristics that influence our actions and decisions. Despite the pervasiveness of such biases, they are generally not leveraged by today's Artificial Intelligence (AI) systems that model human behavior and interact with humans. In this theoretical paper, we claim that the future of human-machine collaboration will entail the development of AI systems that model, understand and possibly replicate human cognitive biases. We propose the need for a research agenda on the interplay between human cognitive biases and Artificial Intelligence. We categorize existing cognitive biases from the perspective of AI systems, identify three broad areas of interest and outline research directions for the design of AI systems that have a better understanding of our own biases.



My reviews on Machine Learning, Data Science and Statistics books

#artificialintelligence

I receive questions on content that explains machine learning, statistics, or data science on a daily basis. I usually learn from books, so I wanted to write a post about the resources I used, some of them e-books. I've finished some of the books, and some of them are in the queue. I've categorized the books according to the topics they cover, and it's important to note that each book suits you according to your background. Without further ado, let's review!


AI Is Everywhere -- Should We Be Excited or Concerned?

#artificialintelligence

Wherever you turn, artificial intelligence is showing up in new technology products and services across almost all industries. Here are a few examples of news headlines from just one day last week. From Bloomberg, "Google Adds a Suite of New AI Tech for Photos and More": "Google's annual I/O conference kicked off on Tuesday and the company showed off all the ways it's using artificial intelligence to make our family memories more vivid, to make smartphone cameras less racist, and potentially to even save lives." From the BBC, "The Navy sub commanded by artificial intelligence": "MSubs of Plymouth, a specialist in autonomous underwater vehicles, won a £2.5m Ministry of Defence contract to build and test an Extra-Large Unmanned Underwater Vehicle (XLUUV) that should be able to operate up to 3,000 miles from home for three months. The big innovation here is the autonomy."


Nobel Winner: Artificial Intelligence Will Crush Humans, "It's Not Even Close"

#artificialintelligence

It's common knowledge, at this point, that artificial intelligence will soon be capable of outworking humans -- if not entirely outmoding them -- in plenty of areas. How much we'll be outworked and outmoded, and on what scale, is still up for debate. But in a new interview published by The Guardian over the weekend, Nobel Prize winner Daniel Kahneman had a fairly hot take on the matter: In the battle between AI and humans, he said, it's going to be an absolute blowout -- and humans are going to get creamed. "Clearly AI is going to win [against human intelligence]. It's not even close," Kahneman told the paper.